

11 — Performance Tuning

by Keith D. Brophy and Tim Koets

If you've done much programming in Visual Basic, you have likely realized that performance gremlins are a frequent companion of the Visual Basic developer. However, it is possible to write efficient, high-performance Visual Basic programs. Better yet, this is achievable by the ordinary programmer, not just the Visual Basic guru who has been weaned on the Windows API. The world of performance encompasses such a vast number of topics and considerations that it could fill a large book in its own right. This chapter opens a door to this world by providing a look at some of the most fundamental Visual Basic performance considerations that every programmer should be aware of.


Note

Sams offers an excellent comprehensive book on performance tuning, Visual Basic Performance Tuning and Optimization, by Keith Brophy and Tim Koets. This book covers a much wider range of performance issues than can be addressed here and is recommended for those who wish to have an in-depth understanding of performance tuning.

When one sets out to slay performance gremlins and eliminate performance bottlenecks, it helps to have an overall understanding of performance tuning strategy. That is the focus of the first section of this chapter. A discussion of the Visual Basic Performance Paradox leads to the question, "What is performance tuning?" Likewise, the accompanying questions of when to optimize, where to optimize, and how to measure performance are discussed.

Armed with this conceptual background, we can tackle specific performance areas. An area of interest to most programmers is the section "VB4 Considerations": a look at specific performance tuning issues that arise with the groundbreaking, 32-bit version of the Visual Basic development tool. Other key areas addressed include variables, code structure, control loading, control properties and methods, math, graphics methods, and picture controls.

The material in this chapter will provide you with a fundamental understanding of basic performance issues. You may not be able to dodge every performance gremlin that pops up in your program, but you will be able to eliminate some of the most persistent ones, and you can look the remainder square in the eye with a better understanding of why they stalk you!

What Is Performance?

Application performance is traditionally thought of as the amount of time a user must wait for a program action to take place. For example, the time it takes a program at startup to fully display the first window the user can interact with, the time it takes to retrieve database data and update a display after a user clicks a command button, or the time it takes to carry out a series of complex financial calculations and show the final result are all speed issues. The process of making changes to improve speed or robustness is commonly called performance tuning, and it is aimed at providing better programs to the end user.

The Cost of Poor Performance

The end user pays for poor performance. In some cases, if the end user is frustrated enough, the developer and company, too, may ultimately pay when the end user refuses to use the product. Perhaps more common than such a dramatic case, however, is the situation in which the end user tolerates a poorly performing program day after day. The developer may be on to the next product, happily coding away, believing "That widget application I did last year was awesome!" The end user, on the other hand, may spend most coffee breaks complaining to co-workers that "It takes that dumb widget program five minutes just to load!" The bottom line is that if the end user perceives he has a performance problem, he does have a problem. This isn't to say the problem can necessarily be fixed, but the ultimate judge of acceptable performance is the end user.

User's Psychological Perception of Performance

The developer must often act as an advocate for the user in gauging performance, because the developer knows (or at least he thinks he knows) what is and is not technically achievable. Likewise, a user's satisfaction point is, perhaps, unreachable, because a user's ideal performance speed would be a zero wait time! However, if an application is usable, performs well enough that users are content to use it, and has taken advantage of available technology to provide the fastest performance reasonably achievable, it can be viewed as an application with adequate performance.

The decision of when this goal is reached is subjective. There is a real danger in not representing the user's viewpoint when considering performance in this manner. For example, a developer of a recipe-management application may consider performance just fine when a hierarchical bitmap-based listbox loads in less than 10 seconds. If the developer reached that conclusion based on a sample database of 50 recipes, however, and the product then ships to customers who work with databases of 50,000 recipes, those customers may feel differently. When the listbox takes 15 minutes to load, performance has not been successfully addressed from their standpoint.

A user's outlook on the performance of an application is also shaped largely by expectations and past experience. If the same application is provided to two businesses, workers may react to it very differently. If one business has just switched its workers from mainframe-connected terminals to PCs, and your new application is the first PC application they have ever used, they may be delighted with the performance. Compared to their terminal experience, it seems to blaze if they only have to wait five seconds to retrieve data instead of the 30 seconds they were used to.

On the other hand, if the other business has a group of users accustomed to working with fast local spreadsheets on the PC, and then they are introduced to a networked PC program that takes five seconds to retrieve data, they may be very unhappy. In the first case, the developer should consider himself fortunate. In the second case, the users should be educated to realize that network database applications have performance that is not on a par with local applications so they can downscale their expectations to be more realistic.

In many cases, even if there are performance constraints the developer can do nothing to fix, the satisfaction level of the user can still be increased by distracting them during times of system inactivity. For example, displaying a splash screen, or colorful startup screen, in front of the user can give him something to look at and divert his attention while a more complex form with many controls loads.
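As a minimal sketch of this technique, assuming a project that starts from Sub Main and has two forms named frmSplash and frmMain (both names hypothetical), the startup code might look like this:

Sub Main()
    ' Show the lightweight splash form first so the user has something to look at
    frmSplash.Show
    frmSplash.Refresh           ' force the splash screen to paint before the heavy work begins
    ' Load the complex main form while the user views the splash screen
    Load frmMain
    frmMain.Show
    ' Remove the splash screen once the main form is ready
    Unload frmSplash
End Sub

The Refresh call matters because the splash form would otherwise not finish painting before the lengthy Load of the main form begins.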

Tactics such as assessing performance from the standpoint of real-world use, educating the user in order to shape expectations, and distracting the user from performance through avenues such as splash-screens are all an important part of the strategy of addressing user performance concerns, in addition to performance tuning. The reception your program receives will involve the emotions and reactions of your users as well as the ultimate speed of your application on their computers. Therefore, performance addresses subjective perception considerations as well as objective, measurable behavior.

The Challenge of Producing Meaningful Performance Assessments

Performance is typically evaluated by collecting timings. These timings may be collected from the application itself, or from test applications that highlight various characteristics of the target application. In some cases, comparisons may be performed between different code techniques to determine which of various alternative methods is preferable. There are many considerations to keep in mind when carrying out performance analysis that can complicate the task considerably.

Many Factors Affect Performance

Nothing would seem to be easier on the surface than choosing the speediest car at the conclusion of the Indy 500, or the fastest runner at the finish line of the Vermont 100-Mile Trail Run. Simply point to the contestant that reached the finish line first and is carrying the trophy around, and you can say "that one is the fastest!" Or can you...? If you call the same field of contestants back to the starting line and rerun the race, the results may be completely different. A safer statement to make about who's the fastest is to say "that one was the fastest in the last race!"

While computers are more predictable than highly tuned race cars and exhausted runners, in some respects the process of gauging performance of Visual Basic code shares the same outcome uncertainty. The results of a trial run can vary with successive trials, subject to the influence of the environment. And coming up with a fair race that is not slanted to any one competitor is a constant concern.

Everything from memory, to disk access speed, to system swap file configuration, to other programs running, to other Windows configuration issues, and potentially hundreds more issues both minor and major beyond just your application code can affect the speed of your application. The developer must identify any areas of major significance and isolate them out of the test as much as possible. Unfortunately, if all these factors were removed from the test, there would be no PC hardware or operating environment left to test on. Therefore, performance assessment includes understanding the impact of the environment and determining which factors should be eliminated or isolated before gathering timings.


Tip

One performance measurement is not conclusive.

Timing Methods Can Affect Performance Assessments

Even the method used to time various performance alternatives can affect the conclusions that are reached. Different methods of timing code execution are available to the programmer. The crudest of these is simply using a wristwatch; the human error inherent in this approach can introduce a skew of one second to several seconds. Code-based methods can take advantage of Visual Basic time-of-day functions, the VB Timer function, or Windows API timer functions such as GetTickCount.

The choice of a timing method can be very important. For example, the underlying time behind the VB Timer function is only updated every 55 milliseconds under Windows 3.1 and Windows 95, so this timing method cannot be used to measure brief time spans of a few milliseconds. Rather, tests should be structured to be of longer duration if Timer will be used for the timings. Instead of comparing the time for a single textbox assignment versus a single label assignment, you could time 1,000 textbox assignments versus 1,000 label assignments, and Timer would then have adequate resolution for the longer duration task. Alternatively, higher-resolution timing methods can be used, but this requires considerable knowledge of the techniques. For more precise resolution, the timeGetTime API is available, which has a default resolution of around 1 millisecond under Win95 and 5 milliseconds or more under Windows NT. The other well-known timing API, GetTickCount, does not offer this degree of resolution. Its actual resolution varies between Windows 3.1, Windows 95, and Windows NT, and under Windows 3.1 it is essentially the same as using VB's Timer function. The bottom line is that a thorough understanding of the ramifications of the timing method used is required.
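As a brief illustration, the 32-bit form of the multimedia timer can be declared in a standard module and used as follows. This is a minimal sketch; the 16-bit declaration resides in a different library.

' 32-bit declaration of the higher-resolution multimedia timer
Declare Function timeGetTime Lib "winmm.dll" () As Long

Sub TimeAnOperation()
    Dim lngStart As Long
    Dim lngElapsed As Long
    lngStart = timeGetTime()
    ' ... operation being timed ...
    lngElapsed = timeGetTime() - lngStart    ' elapsed time in milliseconds
End Sub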

Fair Tests Are Required

Another key issue in comparison testing is that tests should be fair. You need to ensure that if you are comparing two code methods, they are truly similar methods for comparing the alternatives under consideration. For example, if you were deciding between the use of a textbox or a label, comparing the two code fragments in Listings 11.1 and 11.2 would accurately judge two potentially interchangeable methods.

TestCase #1:  where txtAuthors is defined as a textbox control

txtAuthors = "Keith Brophy and Tim Koets"
TestCase#2:  where lblAuthors is defined as a label control

lblAuthors = "Keith Brophy and Tim Koets"

On the other hand, the test shown in Listings 11.3 and 11.4 would be unfair, because it uses the most efficient textbox update method, but would not accurately reflect the most efficient label method available.

TestCase #1:  where txtAuthors is defined as a textbox control

txtAuthors = "Keith Brophy and Tim Koets"
TestCase#2:  where lblAuthors is defined as a label control

lblAuthors.caption = "Keith Brophy "

lblAuthors.caption = lblAuthors.caption & " and Tim Koets"

The Visual Basic Performance Paradox

The paradox of Visual Basic programming is that the very characteristics that allow it to be praised as a wonderful development language can also lead to it being criticized as a language with performance drawbacks. Visual Basic allows developers to piece together programs with tremendous ease and a minimum time investment. However, the same ease of use and rapid application development technology that makes that rapid assembly possible can introduce many difficulties from a performance and optimization standpoint.

Custom Control Layers

Visual Basic programs are often built with many custom controls. These custom controls consist of pre-packaged code available to the user in a consolidated library in OCX or VBX form. Controls are typically written in a generic manner to provide maximum flexibility in the widest variety of application uses. Any one application typically exercises only a subset of the properties and methods available, and uses only a fraction of the flexibility built into the control.

That means each use of a control may carry with it more of an automatic overhead penalty than if the user had built a customized control or designed a direct implementation from the ground up.

Ease of Technology Incorporation

The ease of component incorporation in Visual Basic also means that Visual Basic programs, as much as or more than programs in any other language, are likely to leverage areas of technology beyond the programming language itself. The odds are very high that a Visual Basic program makes use of a database, mail support, multimedia, or graph generation. In each of these cases, the performance the user perceives is based on the performance of the underlying technologies: database engine speed, network load impact, the mail interface and mail server, network speed, CD speed, the graphics engine, as well as the application itself.

Interpreted Language

Visual Basic is an interpreted language. When a Visual Basic program is running, the instructions in the Visual Basic executable file are not carried out directly by the PC's processor. Rather, the instructions are passed on as data to yet another program, the Visual Basic runtime executive, that is started along with the Visual Basic application. Runtime executive file names are listed for various versions of Visual Basic:

Visual Basic 3.0             VBRUN300.DLL

Visual Basic 4.0 (16-bit)    VB40016.DLL

Visual Basic 4.0 (32-bit)    VB40032.DLL

A compiled language such as C, Pascal, or Delphi stores executable programs as a sequence of instructions that can be carried out directly by the PC's processor without assistance from another runtime program, avoiding the overhead of the interpreter. It has been stated by many that the performance penalty of the interpreter is relatively small, because the Visual Basic runtime program performs its interpretation very quickly and much of a program's time is actually spent waiting on areas such as system and database response rather than on raw processing of the application's instructions. Nevertheless, there is no denying that an extra level of activity must take place to execute the program.

High-Level, Layered Building-Block Tool

Windows resource management is complicated by the presence of a high-level interpreted language. In the traditional Windows languages of C and C++, management of Windows resources is directly accessible through the Windows APIs. In Visual Basic, the developer can still take advantage of this low-level management, but must also depend on the fact that the Visual Basic interpreter is automatically carrying out its own level of resource and memory management in support of the forms and code that have been defined. Forms are easy to define precisely because the developer does not have to deal with these low-level issues; Visual Basic addresses them "out of sight, out of mind." Visual Basic makes the task of the developer wonderfully easy because it allows work to take place at a high level, pushing the tedious work of programming down to lower layers. The beauty of Visual Basic carries with it an unavoidable penalty: the typical Visual Basic developer is shielded from low-level details by those layers, and it is often these low-level details that affect performance.

Ease of Programming and Impact on Performance

There is still another penalty area of Visual Basic development that is usually conspicuously absent from many industry discussions on Visual Basic performance, but can in some cases be one of the most significant factors that influences performance. Here it is, stated for the record:

Visual Basic can be a sloppy language. It is easy to write sloppy programs in Visual Basic.

Whew! Now that the secret is out we can address the problem. For several reasons, inefficient code can be a serious problem in Visual Basic applications produced in certain development environments. At the top of the list is the fact that Basic is an easy language to begin programming in right away. Even those without prior knowledge of the language find the syntax quick to master with its clear identifiers (that is, End ends a program as one might expect) and lack of complexity (no pointers to explicitly deal with). However, this ease of use allows programmers to piece together programs rapidly in a "first-come, first-served" manner, using the first syntax that gets the job done without considering the ramifications.

Likewise, VB programs can be built in a piecemeal fashion. A snippet of code can be associated with one control, and later a snippet added for another control, with a variable thrown in here or there in the mix. This makes for an easy and painless development path as you add a piece of functionality at a time, but can lead to tangled, unorganized code in the end if planning and discipline are not used along the way.

Similarly, VB is the classic example of an "I'll do it later" language. It's so easy to implement functionality that it's natural for a programmer to add a quick and dirty piece of code. Such code is rarely optimized or elegantly written. Of course, the developer taking these shortcuts usually plans to revisit the code and implement it more elegantly in the future. But it is easy for the developer to forget (or choose to forget) to go back and optimize the code once a working program is in hand.

All the performance problems mentioned here are areas that can be avoided by planning, discipline, and up-front design. These problems don't just go away with experience, however. In the real world, there often remain situations where code must be done quick, dirty, and yesterday rather than today. The inevitable fallout from this mix is that there is often code that cries out for optimization.

Alternative Languages

Until very recently, in the minds of many seasoned programmers, there were two fundamental approaches to Visual Basic application performance problems: rewrite the application entirely in a compiled language such as C or C++, or move the performance-critical code into a DLL written in another language and call it from Visual Basic.

Needless to say, there can be a great deal of work and inconvenience in either of these approaches, and they mitigate the advantages of using Visual Basic in the first place. The world will never know how many potential Visual Basic programs never reached fruition because of the purveyor of doom to be found in every company grumbling and overreacting "It'll never be fast enough in VB...we must rewrite it in C++!" This is not to say that there are never situations where another language alternative is warranted. However, in many cases there are optimization steps that can be taken short of turning to such drastic measures.

Visual Basic versus Other Languages

One issue that often arises in discussions of Visual Basic performance is that of C++ versus VB or Delphi versus VB. All too often, discussions on language performance issues lose sight of the forest for the trees as participants get bogged down in emotional views of particular languages based on their own subjective language experience. There have been industry demonstrations of highly optimized VB applications outperforming similar non-optimized C++ applications. When looked at objectively, most seasoned developers would agree that programs written in C or C++, unless special circumstances prevail, have faster performance than Visual Basic programs. However, more important than the issue of which language is faster is the question of "Can a Visual Basic program have acceptable performance to the user?" The answer is, in most cases, "Yes."

Industry Standard Advice—Write A DLL (Give Up!)

While guidelines on proper use of VB syntax and controls may be the Visual Basic performance battle cry, the white-flag surrender of "Write a DLL in another language!" often seems to follow closely on its heels. A program can often gain considerable speed by implementing key pieces in a block of performance-tuned, carefully crafted code that is packaged into a Dynamic Link Library. This is especially true of compute-intensive pieces of code that must, for example, process many floating-point numbers. Often these routines can be carried out much more efficiently in another language. However, writing a DLL in another language should be an avenue of last resort, not of first resort. Backing away from Visual Basic and into another language mitigates the very reasons that Visual Basic was used as the target language in the first place: ease of use and rapid development.
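For illustration only, a compute-intensive routine compiled into a DLL might be declared in a standard module and called from Visual Basic as shown below; the library and function names here are hypothetical:

' Hypothetical routine implemented in a performance-tuned C DLL
Declare Function SumOfSquares Lib "MYMATH.DLL" _
    (dblValues As Double, ByVal lngCount As Long) As Double

Sub UseTunedDll()
    Dim dblData(1 To 1000) As Double
    Dim dblResult As Double
    ' Pass the first element ByRef so the DLL receives a pointer to the whole array
    dblResult = SumOfSquares(dblData(1), 1000)
End Sub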

When to Optimize

Software optimization is as much of an art as a science. Optimization is not a set of rules and techniques that can be applied the same way to all applications with equally predictable results. While software developers have collected a body of general optimization techniques, the practical application of these techniques varies widely with the programming languages being used, the operating system and hardware platforms being targeted, and the end users' expectations of the software. Three key observations lend support to the assertion that software optimization is an art. The developer should understand them in order to appreciate the magnitude and scope of software optimization and its ramifications.


Tip

Software optimization is not just a set of techniques that can be applied at the end of software development.

If you wait until you have written all your code and then decide to optimize it, you are never likely to have a product with the highest possible efficiency. In order to properly optimize an application, you must optimize code, to some extent, as it is being written. Developers constantly face the temptation to take shortcuts when writing software. Once the actual coding effort begins, the developer is further tempted to simply get something working, and then worry about how efficient it is.

When using a language like Visual Basic, this temptation is especially alluring. One of Visual Basic's primary strengths is to make it easier for programmers to create applications, especially for those who may not have an extensive programming background. A significant penalty for this, however, is the equally high ability to write inefficient code. Thus, more of a burden is placed on the developer to write structured, maintainable, and efficient code.

When an application is put together hastily and without regard to performance, the developer often finds that when the program is finished, the performance is unacceptable to the user. The developer must then go back and figure out how to make the code more efficient. While it is possible for developers to spend too much time optimizing at early stages in software development, developers should take the time to write the code efficiently during development and not to just optimize it later.


Tip

Software optimization must be based on how users will use the application and on what kind of systems the application runs.

In order to better understand when to optimize, the developer should work closely with testers and users to see just how the application is being used. The developer may be unable to predict just how the user will use his application during a typical day, and may therefore completely overlook areas of the application the user will be dissatisfied with.

An important factor that the developer should understand up front is the multitude of hardware and operating system configurations the application may be run under. Each configuration may behave differently in terms of performance, and each one needs to be addressed separately. Often during development, the programmer fails to anticipate a unique type of hardware or operating system configuration the user may select. If this particular configuration bogs down performance and it is the only one the user wishes to use, he will be very unhappy with the program. That one overlooked configuration can cause much anxiety for you as the programmer once you release the application.

The most effective way to overcome the failure to anticipate the user's pattern of usage with your application is to continually elicit tester feedback, and optimize and evaluate each major feature or component of functionality in the application. As the major components of functionality are built and integrated with one another, this evaluation of performance from the user's perspective must continue and broaden as the application broadens. Keep in mind the importance of optimizing in little ways as you develop and not overlooking minor housekeeping tasks to improve performance.


Tip

Optimization is not always appropriate.

Optimization often comes at the cost of increased difficulty in code maintenance, debugging, and reuse, as well as the risk of optimizations that compete with each other in the same application. Optimization sometimes takes code that is fairly easy to understand and makes it more complex or difficult to follow. This, in turn, can make the program more difficult to maintain, either for yourself or for someone totally new to the code. While this can be minimized by careful documentation of the optimization stages, a trade-off does exist between making an application very efficient and leaving its code cryptic and difficult to understand.

Likewise, optimization often makes code difficult to debug. This trade-off goes hand in hand with maintenance, because code that is more difficult to understand is typically more error prone and more difficult to debug. If the programmer is careful to test the application in light of each applied optimization technique, the chances of introducing errors are greatly reduced.

Yet another area of compromise is code reuse. The goal in Visual Basic, as in any other language, is often to write reusable components of software so that they can be applied in a wide variety of projects. Optimization strategies often include techniques that can undermine the design goals of code reuse, particularly the well-defined interface between modules. Care must be taken to preserve this interface as much as possible, avoiding any implied conditions for the programmer reusing the code.

Often performance is increased by taking certain assumptions into account, limiting the scope of the code's reusability. Herein lies the tradeoff. If certain assumptions are made, it is very important to document them thoroughly. While this can be an acceptable tradeoff, it is unwise to make too many assumptions and take out critical code such as error checking and validation. Your users will be much more upset with an efficient application that crashes than a less efficient application that runs correctly every time under every circumstance. The developer has to make the final decision by weighing the benefits of optimization with these competing factors that can make optimization prohibitive.

Where to Optimize

Optimization can be a time-consuming process, so knowing where to optimize is very important. A programmer can spend a great deal of time optimizing part of an application that is not critical to performance. Likewise, a programmer can overlook areas of code that are very critical to performance. If a developer does not have a good understanding of which parts of an application are critical to performance and which parts are not, he will be "shooting in the dark" when optimizing, and will be likely to optimize areas he thinks are critical with no quantitative justification for doing so.

When considering how Windows applications run, areas for improving performance can be classified into two main categories: actual speed and perceived speed. Actual speed represents how fast your program performs calculated operations, database access, form loads and unloads, graphical painting, and file I/O. Perceived speed represents how fast your application appears to run. These two categories provide an optimization framework that allows the programmer to break down performance tasks into these two areas, concentrating on each one as appropriate.

In order to increase the actual and perceived speeds of an application, the programmer must be intimately familiar with the application, starting with the code. Programmers can use the VBCP32 and VBCP16 profilers that come with Visual Basic, commercial profiling tools such as Avanti's PinPoint, hand-inserted timings, or simple analysis to determine which segments of code are executed the most. As a general rule, one should spend the greatest amount of time optimizing the code that has the largest impact on the user, so knowing which segments of code get executed the most is critical.

In addition to understanding the code, the developer must also be aware of the amount of resources and memory the application uses. Because Visual Basic applications run in the Microsoft Windows environment, much goes on behind the scenes that has a critical impact on performance. Resource- and memory-monitoring tools can be used to find areas of excessive memory and resource usage that degrade performance.

As a result of carefully analyzing the code, memory, and resource usage in the application, the programmer will discover a collection of areas to optimize. Because time is almost always a scarce resource when developing applications, the potential optimization areas must be prioritized so that areas most critical to overall performance are addressed first, followed by less critical areas, in order of decreasing priority.

The first step in trying to make an application run faster is to see how long it takes the application to perform its tasks. Specifically, the developer must profile the code, checking to see how much time certain commands, subroutines, functions, procedures, operations, algorithms, events, and methods take. When profiling the code, it is essential to exercise all possible cases and conditions so that a realistic profile can be taken and no areas are overlooked.

One must be careful never to blindly assume the code that executes the longest must be inefficient. Just because a subroutine consumes a large amount of time in an application, for instance, does not necessarily mean it is inefficient. It simply means that the program spends most of its time in that subroutine, whether it gets carried out frequently and executes quickly, gets run infrequently but takes a long time, or somewhere in the middle. In order to determine this, the developer must go deeper.

Depending on the profiling approach you use, whether a Windows API timing call, user perception, a commercial profiling tool, or some combination of these approaches, you should at least have a set of timings for every procedure in your application that is carried out frequently by the user or is of key impact to them. The next step is to take each critical procedure and determine which ones the program spends the most time executing. Those are the procedures you must pay close attention to, because they are the ones in which modifications most dramatically affect the application's overall performance.

Once you have ordered the procedures in terms of percentage of total user impact, you should take each procedure (starting with the one that gets the most execution time), break it down into components, and analyze each component separately, determining how long it takes each component to execute. The components may consist of a single line of code or modular groups of code. You will gain a good understanding of what each section is doing and how those sections individually contribute to the overall execution time.
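One simple way to gather such component timings, sketched below with illustrative section names, is to bracket each section with Timer calls and print the results to the Debug window:

Sub ProcessOrder()
    Dim sngStart As Single

    sngStart = Timer
    ' Section 1: retrieve the data
    ' ...
    Debug.Print "Retrieve data: " & (Timer - sngStart) & " seconds"

    sngStart = Timer
    ' Section 2: update the display
    ' ...
    Debug.Print "Update display: " & (Timer - sngStart) & " seconds"
End Sub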

Continue to break out sections of code within each procedure until the subsections are down to a low enough level to apply a clearly defined optimization technique or set of techniques to that subsection. Again, the developer should maintain a prioritized list of the sections and subsections that need to be optimized, based on the percentage of execution time they account for in the application.

Once every procedure and module significant to the performance of the application has been fleshed out, the developer must make a judgment call on what procedures and sections to deal with first.

How to Measure Performance

When measuring the amount of time it takes to execute a series of instructions, the resolution of the timer you use is very important. If the timer you use is not precise enough, your results will be inaccurate and probably not worth very much. The frequency with which the timer is updated is important. If a timer only updates once every 55 milliseconds, for example, and an operation takes 109 milliseconds to execute, the timer may report that only 55 milliseconds elapsed, an error of roughly 50 percent!

Windows also makes getting accurate timings inherently difficult. This is due to the means by which Windows accomplishes tasks. Unlike their MS-DOS counterparts, which are procedure-driven, programs written for Windows are event-driven. Procedure-driven code executes in a very predictable fashion, one line of code at a time, in a predetermined manner. The programmer is usually in full control of the environment. When the programmer decides it is time for the user to offer input, he allows the program to wait for the user to take an action such as entering data into a field. In Windows, however, the user can respond in one of several ways, each of which is called an event. Each event can potentially occur in any order and must be accounted for by the software.

In order for Windows applications to be event-driven, the operating environment must inform the program what is happening. For example, when the user clicks on an OK button, Windows must tell the Visual Basic application that the click event has occurred for that button. Windows does this by passing a message to Visual Basic, telling it the OK button has been clicked on by the mouse. Visual Basic then triggers the cmdOK_Click() event, executing the code you place in that event subroutine.

Windows must rely on messages to respond to user actions, timers, screen drawing, operating system maintenance, and so on. The number of messages Windows receives at any one moment can fluctuate. At times, very few messages are being processed by Windows, and it spends most of its time waiting idly for the user to do something. At other times, the system is so busy that the user has to wait for Windows to take care of all its messages and background tasks. In addition to system messages, Windows often has background tasks that must be processed, such as updating disk caches or virtual memory, updating the screen, or handling network access.

The same issue of time variability can also arise when using a timer in your code. You may put a Timer call at the beginning of a subroutine and another at the end. If the elapsed time between those two calls varies from run to run, it is often because every time you execute the body of code, Windows has a particular set of background tasks to process, different memory management considerations, or, under certain conditions, Windows messages that must be immediately addressed. In other words, it may have more to do during one instance of the subroutine's execution than another.

Thus, depending on the state of the system, it may take more time in one instance of the execution than another instance simply because Windows happens to have more to take care of at the time the application is executing. Thus, you can often receive inconsistent timing values. As a programmer, you must accept the fact that Windows limits your control over the accuracy of the timings.

Various techniques can be used to take timings for measuring performance in applications. Perhaps the easiest way is to use the Visual Basic Timer function (see Listing 11.5). This is very easy to use, because it is a VB command and can be placed directly in code. This function is accurate to the nearest 55 milliseconds under Windows 3.1 and Windows 95. Resolution is significantly greater under Windows NT. The Timer call can be placed at the beginning of a series of code statements to be timed and again at the end, as in the following example. Then, the difference in reported times can be calculated in order to obtain the elapsed time for the operation or series of operations being timed. You can also rely on commercial profiling tools such as the VBCP32 and VBCP16 profilers packaged with Visual Basic, and Avanti's PinPoint, which help automate the process of inserting timing-gathering code and collecting and analyzing results.

Private Sub Command1_Click()

Dim rc%

Dim sngStart As Single

Dim sngEnd As Single

Dim lngIndex As Long

' The timing obtained below will only be accurate to the nearest 55 milliseconds

'    due to underlying constraints of the hardware timer interrupt dependencies

'    on Windows 3.1 and Windows 95 Systems

sngStart = Timer

'Time text box assignments

For lngIndex = 1 To 100000

    txtField1 = "Testing

Next lngIndex

sngEnd = Timer

rc% = MsgBox("Assignments took " & Int((sngEnd - sngStart) * 1000) & " milliseconds.")

End Sub

Visual Basic 4.0 Considerations

32-Bit versus 16-Bit Code

Visual Basic 4.0 can produce either 16-bit or 32-bit code, depending on which version of the development environment is used. 32-bit programs can run only in 32-bit Windows environments such as Win95 and NT, while 16-bit programs can run in either 32-bit environments or 16-bit environments such as Windows 3.1 or WFW 3.11. A consideration of Visual Basic program performance, then, also entails consideration of the 32-bit and 16-bit environments in which programs will run.

A 32-bit program has inherent performance advantages because it can take full advantage of the 32-bit operating system environment and APIs. There are many complex reasons for the superiority of the 32-bit environment, including the fact that programs are freed from the shackles of the 16-bit segmented memory architecture and that preemptive multitasking can take place.

The ability of the 32-bit Win95 operating system to carry out preemptive multitasking is very significant to performance measurement. First of all, it makes assessing performance much harder. In a cooperative multitasking environment such as Windows 3.1, programs have, generally speaking, full control of the processor when they execute. Once a process gains control, no other process gets a turn until the process underway completes or explicitly yields control of the CPU. This means that programs under Windows 3.1 can "hog" the CPU, locking out other applications until their own turn is complete. This often results in programs sprinkled with DoEvents statements that cause Visual Basic to temporarily give up or share the CPU.

From a performance assessment standpoint, obtaining meaningful timings can be somewhat easier under cooperative multitasking. If you look at a piece of code that doesn't yield the CPU, and then time that code segment, system activity may have affected the timing somewhat, but you can be confident no other application was given a slice of the processor time.

With Win95, however, the ground rules are different. No longer must an application use DoEvents to be a good neighbor and give other processes a turn during lengthy calculations. The operating system takes care of that itself if needed, temporarily cutting one process off to give another a turn. The operating system makes these decisions based on scheduling algorithms that take many factors into account. The processes being managed are no longer in the driver's seat for juggling their own CPU time once they start to execute. Therefore, when you time a given piece of code, even if that code does not yield the CPU, the operating system may give other processes turns before every instruction in the timed piece of code completes, which affects the duration of your timing. There are various timing approaches that can be taken to cope with this, but the most basic guidelines are simply to ensure you have no extra tasks running when assessing performance, and to always base performance analysis decisions on a series of timings rather than one individual timing.

The performance-related choice of which environments to target for an application, 32-bit or both 16- and 32-bit, also confronts the developer. The answer is dictated by your user base, but you can count on the fact that users with 32-bit operating systems will enjoy better performance if you can provide them with 32-bit applications rather than 16-bit applications. The exact degree of speed increase of an application when it is moved from the 16-bit world to the 32-bit world is extremely application specific. However, with this disclaimer in mind, tests on a variety of small applications show that performance improvements of 50 percent are commonly achieved when code is migrated from 16-bit to 32-bit VB programs.

This is enough of a performance gain to offer considerable incentive to developers to move their applications to 32-bit format. Fortunately, the 16-bit user base does not have to be abandoned to make this a reality. The conditional compilation capability of VB 4.0 makes it feasible to maintain the same program in both 32-bit and 16-bit versions from the very same source files. Programs can automatically be compiled with one set of declarations when targeted for the 16-bit environment, and another set of declarations when targeted for the 32-bit environment. This technique, illustrated below, means that you can bring enhanced performance to your high-end users without abandoning your 16-bit holdouts.

#If Win32 Then ' 32-bit VB uses this Declare.

     Declare Function GetTickCount Lib "kernel32" () As Long

     Declare Function GetWindowsDirectory Lib "kernel32" Alias _

     "GetWindowsDirectoryA" (ByVal lpBuffer As String, _

           ByVal nSize As Long) As Long

#Else   ' 16-bit VB uses this Declare.

     Declare Function GetTickCount Lib "User" () As Long

     Declare Function GetWindowsDirectory Lib "Kernel" _

      (ByVal lpBuffer As String, ByVal nSize As Integer) As Integer

#End If
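With these declarations in place, the same timing code then compiles and runs in either environment; a minimal usage sketch:

Sub TimeWithTicks()
    Dim lngStart As Long
    Dim lngElapsed As Long
    lngStart = GetTickCount()
    ' ... code being timed ...
    lngElapsed = GetTickCount() - lngStart   ' elapsed milliseconds, subject to the tick resolution discussed earlier
End Sub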

There are significant performance differences between environments in terms of control loading as well. 32-bit Visual Basic load performance is dependent on the 32-bit OLE controls (sometimes referred to as OCXs) that it supports. 16-bit applications can make use of 16-bit OLE controls as well as 16-bit VBXs. A detailed discussion of this topic appears later in this chapter in the controls section.

Object Linking and Embedding (OLE)

Another very powerful technical capability of Visual Basic 4.0 is the level of integration that can be achieved with other applications through OLE, specifically embedding or linking data with the OLE container, direct insertion of OLE objects, or the use of OLE Automation Objects to control other applications. With easy access to other application objects from a VB application, and now the ability for a VB application to likewise make itself available as a server, the only limit to what a VB application can now achieve is the developer's initiative and creativity. This world of object sharing does impose some important performance considerations, however.

When you use the services of another application, you inherit its performance impact. OLE performance is much improved from early versions, but nevertheless, far more overhead goes on to support OLE than if an application simply had internal code to achieve the same result with no external dependencies. It is incumbent on the developer to fully assess performance of the integrated piece of software before settling on it as an acceptable solution. You adopt that software object as your own when you incorporate it into your application, because from the standpoint of the user it is part of your application. Sometimes there will be nothing you can do about poor performance of an object driven through OLE Automation. Many times, though, performance can be improved simply by gaining a better understanding of the object's methods and properties.

The same principle in reverse applies when providing services through exposed objects, OLE servers, and DLLs. If the code you develop is intended to be shared with other applications, there is an added need for performance tuning. Others will certainly be using the application, and they will be using it with additional overhead to get to the capabilities your external software provides. You should assess performance from this perspective with relevant test applications before considering any object complete and ready for public use.

In-Process versus Out-of-Process OLE Automation Servers

With the 32-bit version of Visual Basic 4.0, two types of OLE Automation Servers can be implemented: in-process and out-of-process OLE Servers. In-process servers are built from Visual Basic source code into DLLs, whereas an out-of-process server is built as an executable file. As the name implies, an in-process server can be run in the same address space as its calling application. An out-of-process server incurs the overhead of a separate address space and corresponding OLE communication issues. Therefore, the use of in-process servers can provide considerable performance benefits. This is another strength of 32-bit Visual Basic applications for which the 16-bit application has no equivalent alternative.
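The calling code looks the same in either case; only how the server is built and registered differs. A hedged sketch follows, using a hypothetical ProgID and object members:

Sub UseAutomationServer()
    Dim objCalc As Object

    ' "Widget.Calculator" is a hypothetical ProgID; it could be served by an
    ' in-process DLL or an out-of-process EXE built from the same class code.
    Set objCalc = CreateObject("Widget.Calculator")
    objCalc.Rate = 0.07                  ' each property or method call crosses into the server
    MsgBox objCalc.Total(1000)
    Set objCalc = Nothing                ' release the server object
End Sub

When the server is in-process, those property and method calls avoid the cross-process marshaling an EXE server requires, which is the source of the performance difference.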

OLE Server Busy Timeouts and Perceived Performance

When you make use of another application's exposed objects through OLE, you are counting on that OLE server application to be available to service your requests. However, if that application is already responding to the requests of yet another client application, your application must wait its turn. Likewise, if your application requests some action that the server application is not able to fulfill because of its current state (for example, if it is already displaying a modal error message to an interactive user), the server will not be able to respond right at that time.

In both of these cases, Visual Basic automatically keeps trying to fulfill your request by continually retrying to give the request to the server, in hopes that the busy period is a temporary condition. Visual Basic continues these repeated efforts until a given timeout period has expired, and then it displays the "Server Busy" message to the user.

The retry duration is controlled by your application through the App.ServerBusyTimeout and App.RequestPendingTimeout properties. The ServerBusyTimeout property has a default of 10 seconds, and RequestPendingTimeout has a default of 5 seconds. While OLE communication establishment is not an area you can tune or control, this is an area that may be a perceived performance issue to your user because they are very aware that they are waiting on the software at these times.

The timeout properties raise an interesting performance question for the developer. Is it better to give the user faster feedback with less likelihood that all operations will succeed, or to build in longer wait times that the user may be subject to, with the payoff of a better chance of OLE operations succeeding? Will your users' perception of the program be better if they periodically have to wait long periods of time but rarely have the application give them a busy (failure) message? If so, you may want to increase the timeout periods even more. Or is it more likely that situations where the OLE server is tied up will be very infrequent? In that case, perhaps your users would prefer immediate feedback in the rare cases when the server is busy, rather than being subject to much longer waits only to see the same "busy" error message. As you likely suspected, there are no stock answers to these questions. A timeout strategy that works well for one user on a blazing fast Pentium with little OLE server contention may impose insufferable waits on another user on a turtle-slow 386 where there is much contention for OLE services from other software. In general, the Visual Basic defaults serve as an effective starting point, but depending on your specific OLE server needs and target user base, you may need to tweak them to satisfy your users' robustness or acceptable wait time needs.
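If you do decide to adjust the defaults, a minimal sketch follows. In the App object these settings typically carry an Ole prefix (the shorter names used above refer to the same properties), and the values are expressed in milliseconds:

' Allow more retry time before the user sees the "Server Busy" message
App.OleServerBusyTimeout = 20000        ' wait up to 20 seconds for a busy server
App.OleRequestPendingTimeout = 10000    ' wait up to 10 seconds on a pending request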

Objects and User-Defined Classes

Objects, and the VB 4.0 capability to define classes from which objects can be created, pose the traditional performance tradeoff. Objects are a powerful construct and greatly enhance development and maintenance. However, enhanced development often comes at the expense of speed. A very complex design based on a hierarchical object implementation, built on layers of collections and objects, can have a performance impact compared to a flatter, simpler implementation that would require less interpretation time. The performance impact is very application specific, and in most cases the development benefits outweigh the performance penalties. The best way to assess this is to simply take measurements throughout the development lifecycle, so there are no surprises at the end of a complex project when performance is considered for the first time.

Another object consideration is to use the most specific means possible of declaring an object. The more specific the declaration, the faster performance is when that object is accessed. The following assignments give examples of slow and fast object declaration pairs:

Dim MyForm as New Form                    ' Slower

Dim MyForm as New frmPreviouslyDefined    ' Faster - uses existing form

Dim MyThing as New Object                 ' Slower

Dim MyThing as New MyObjectClass          ' Faster - uses class module def.

Dim MyControl as New Control              ' Slower

Dim MyControl as New TextBox              ' Faster - uses control class

The performance improvement from any one such change is relatively small, but it applies to subsequent object accesses as well as to the declaration statement itself. In an object-intensive program the cumulative benefit can be noticeable. There are no significant maintenance disadvantages to this approach, and the more specific declarations could even be regarded as better programming practice. Therefore, this approach should be used as standard practice.


Tip

For faster performance, always use the explicit class name rather than the generic one when declaring objects.

Collections of Objects

In addition to the ability to use objects, we can also manipulate them much more easily in Visual Basic 4.0. A very powerful capability of Visual Basic 4.0 is the means to create collections of objects, including collections of objects defined from user-defined classes specified in class modules. Managing objects is much easier when those objects are grouped into collections. Collections can have add methods applied, remove methods applied, and be indexed by item. There are many inherent advantages to these methods, such as the fact that when an item is removed from a collection with the remove method, the collection is automatically compressed. You don't have to worry about holes left in the array sequence from deletions and writing code to carry out corresponding shifts of data or Redims.
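A brief sketch of these methods in use; the collection contents here are illustrative:

Sub CollectionDemo()
    Dim colRecipes As New Collection

    colRecipes.Add "Bread"              ' Add appends an item
    colRecipes.Add "Soup"
    colRecipes.Add "Stew"

    colRecipes.Remove 2                 ' remove "Soup"; the collection closes the gap itself
    Debug.Print colRecipes.Count        ' prints 2
    Debug.Print colRecipes.Item(2)      ' "Stew" has shifted into position 2
End Sub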

Collections do come at a performance cost that, as usual, will vary depending on the specific application. Tests were carried out on a simple user-defined class defined in a class module. A comparison was made of creating a collection of these objects versus an array of these objects. The array approach was noticeably faster. Then another test was carried out comparing modification of a given property for every object created. The array modification loop was dramatically faster than the collection loop, as reflected in the following table:

Object Creation in Collections versus Arrays (Normalized Results)

Operation                        Collection : Array Ratio

Creation of All Instances        1.25 : 1.00

Object Modification in Collections versus Arrays (Normalized Results)

Operation                        Collection : Array Ratio

Modification of All Instances    15.00 : 1.00
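A minimal sketch of the kind of comparison behind these numbers, assuming a user-defined class module named CTestItem with a single Value property (both names hypothetical):

Sub CompareCollectionToArray()
    Dim colItems As New Collection
    Dim aItems(1 To 1000) As CTestItem
    Dim objItem As CTestItem
    Dim lngIndex As Long

    ' Creation: collection versus array
    For lngIndex = 1 To 1000
        Set objItem = New CTestItem
        colItems.Add objItem
    Next lngIndex
    For lngIndex = 1 To 1000
        Set aItems(lngIndex) = New CTestItem
    Next lngIndex

    ' Modification: indexed access into the collection is far slower than array access
    For lngIndex = 1 To 1000
        colItems(lngIndex).Value = lngIndex
    Next lngIndex
    For lngIndex = 1 To 1000
        aItems(lngIndex).Value = lngIndex
    Next lngIndex
End Sub

Wrapping each of these loops with the timing methods described earlier yields ratios of the kind shown in the tables.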


Tip

Object creation and manipulation generally carry a performance penalty when collections are used in place of arrays. Despite this, the development, maintenance, and good-programming-practice advantages of collections are likely significant enough to warrant their use in many cases.

Using Variables

The data types that variables are based upon can have a dramatic impact on performance. Because some data types perform faster than others, choosing appropriate data types for particular uses, and avoiding the general use of slower ones, is essential to making an application as efficient as possible. A brief overview of the most commonly used Visual Basic data types follows, along with some general guidelines for using variables.

Visual Basic Data Types

The following lists the Visual Basic data types and their descriptions:

Integer—A variable that has been defined with the integer data type is a number that is two bytes in length. Because it is an integer, no fractional or decimal places can be used. Integers can range from -32,768 to 32,767. The integer is the most commonly used data type in Visual Basic. It is the standard data type for representing numbers for counting and representing quantities.

Long—Variables of data type long are also integers, except that they use four bytes of storage rather than two. This gives them a much higher range of -2,147,483,648 to 2,147,483,647. Because this data type is also an integer, decimal points cannot be used. If the range of an integer is not suitably represented in two bytes, it can almost always be represented with four bytes. If the range exceeds even the long data type, a floating-point data type must be used.

Single—Some quantities or expressions cannot be easily represented without using a decimal point. Furthermore, sometimes programs require numbers with a much higher range than a long integer can provide. In either case, Visual Basic provides several floating-point data types. The first, the single data type, is a 4-byte value with a range of -3.4 x 10^38 to -1.4 x 10^-45 and 1.4 x 10^-45 to +3.4 x 10^38. This data type is usually sufficient to represent most floating point values. While the single data type takes just as many bytes as the long integer, additional instructions are required by the computer to take the decimal place into account.

Double—When even larger floating-point values are required, the double data type can be used. A variable with this data type is eight bytes in length and has a range from -1.8 x 10^308 to -4.9 x 10^-324 and 4.9 x 10^-324 to 1.8 x 10^308. When the programmer needs to use floating-point numbers, this data type should be used only when this degree of precision or range is required. Because it takes eight bytes to represent a variable, it requires more memory and takes longer in calculations.

Currency—The currency data type is an 8-byte number with a fixed decimal point. This data type provides a range of -922,337,203,685,477.5808 to 922,337,203,685,477.5807. The currency data type supports up to four digits to the right of the decimal point and up to 15 digits to the left. This data type was designed for monetary calculations, because the decimal point is fixed. It does have a more limited range than the floating-point numbers, but the smaller range makes it less susceptible to the small rounding errors that can occur with the other floating-point data types.

String—The string is a data type used to hold alphanumeric data. Strings are used extensively in most applications, because they are required to store and represent textual information. The length of a string variable depends on whether the string is declared fixed-length or variable-length. A variable-length string has four bytes of overhead along with the number of bytes it takes to represent the string (the number of characters). In the case of a fixed-length string, the user specifies the number of bytes for the string when it is declared, and no overhead bytes are required.

Variant—The variant is a "catch-all" variable that can represent any data type and automatically converts between them when necessary. Unfortunately, the variant adds a great deal of processing overhead and significantly increases the amount of time and memory used in an application. A variant used to represent a variable-length string, for example, takes 16 bytes plus one per character in the string. If a variant is used to represent a number, it requires 16 bytes of memory as opposed to 2 for an integer, 4 for a long or single, and 8 for a double.

Constants—While the Visual Basic constant is not, strictly speaking, a data type, it is mentioned here since it is yet another way to represent data. Constants can be used to represent numeric values and strings. While constants still require the same amount of memory to be represented in an application, they are loaded differently into memory and can help make an application more efficient.
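For reference, the declarations below show how each of these types might appear in code; the variable names and values are purely illustrative.

Dim intCount As Integer        ' 2 bytes; -32,768 to 32,767
Dim lngTotal As Long           ' 4 bytes; wider integer range
Dim sngRatio As Single         ' 4 bytes; floating point
Dim dblPrecise As Double       ' 8 bytes; floating point, greater range
Dim curPrice As Currency       ' 8 bytes; fixed decimal point
Dim strName As String          ' variable-length string
Dim strCode As String * 10     ' fixed-length string of 10 characters
Dim vntAnything As Variant     ' can hold any of the above, with overhead
Const MAX_ITEMS = 100          ' a constant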

Use Strings Instead of Variants

To compare the use of fixed-length strings, variable-length strings and variants, an application was constructed that assigns text to variants, variable-length strings, and fixed-length strings. The strings are declared and assigned as shown in Listing 11.6.

' Create an array of variable-length strings

ReDim svTest(1 To intStringCount) As String

' Assign the variables

For iCount = 1 To intStringCount

     svTest(iCount) = "Test String"

Next

The same type of test was carried out for fixed-length and variant strings. Timings were taken for each variable assignment separately, and the results obtained are shown in Table 11.1.

Data Type                   Elapsed Time

Fixed-Length String         1.00
Variable-Length String      4.50
Variant                     8.00

Note that the use of fixed-length strings provides the fastest execution time. This is to be expected, because Visual Basic knows the size of the string ahead of time and doesn't need to calculate a size as it does with the variable-length string or variant. It is not always possible or desirable, however, to use a fixed-length string, so a variable-length string must be used. Even so, both are considerably faster than using a variant.


Tip

Use strings instead of variants.

Variants require more time due to the overhead involved in handling them. Thus, the use of variants to represent strings should be avoided whenever possible. In specific cases, variants must be used to represent dates or can be used to easily convert between data types. Their use, however, should be restricted to situations where they are truly necessary.
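The instrumentation used to gather the timings in this chapter is not reproduced here, but the sketch below shows one common way such a measurement can be taken with the Timer function; the loop count and variable names are illustrative. Running the same loop with a variant or a fixed-length string in place of the variable-length string gives the kind of relative comparison shown in Table 11.1.

Dim sngStart As Single
Dim sngElapsed As Single
Dim lngPass As Long
Dim svTest As String

sngStart = Timer                      ' seconds elapsed since midnight
For lngPass = 1 To 100000
    svTest = "Test String"            ' the operation being measured
Next lngPass
sngElapsed = Timer - sngStart
Debug.Print "Elapsed time: "; sngElapsed; " seconds"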

Use Integers Whenever Possible

As discussed earlier, integers use fewer bytes than any other numeric data type. Use integer variables whenever possible rather than floating-point data types or variants. To illustrate this principle, a test application was constructed which assigns an integer value to each data type. A simplified code listing of this process is shown in Listing 11.7.

' Variable Definitions

Dim intTest As Integer

' Perform the assignments

intTest = 33

A similar test was carried out for longs, singles, doubles, and variants. The results of each individual timing can be seen in Table 11.2.

Data Type    Elapsed Time

Integer      1.00
Long         1.12
Single       2.36
Double       2.40
Variant      1.24

Notice that the integer assignment has the fastest execution time, followed closely by the long data type. The floating-point data types both require approximately twice as much execution time due to the additional processing required to take into account the decimal-point and increased range capabilities of the data types.

The variant takes approximately 25 percent more time for the assignment than the integer and about 10 percent more than the long integer. While one might expect the variant to take the longest time of all the data types, Visual Basic treats the variant like an integer because the number assigned to it can be represented most efficiently by typecasting it into an integer. In doing so, the execution time is very close to that of an integer. Additional time is still required, however, for the overhead required in typecasting the variant into an integer. This once again supports the conclusion that the variant should be avoided unless necessary.

The floating-point data types do not require as much space as the variant does, but their execution time is considerably greater than that of the integer types. The variant provides easy and automatic conversion between data types, but at the cost of memory. The floating-point data types provide decimal-point capability without the extra memory baggage of the variant, but they still require more time in calculations than integers.


Tip

Use integers whenever possible.

Using the Currency Data Type

The currency data type is a special data type designed for monetary calculations. An application was constructed that performed a simple multiplication of dollar amounts using the currency data type as well as all the other data types. The results of the timings are shown in Table 11.3.

Data Type    Elapsed Time

Currency     1.63
Integer      1.00
Long         1.85
Single       1.04
Double       1.09
Variant      36.96


Tip

Use alternatives to the currency data type.

As can be seen, the use of the currency data type is slower than its floating-point counterparts. Note the dramatic increase in execution time of the variant data type. This again underscores the importance of avoiding this data type, particularly for floating-point and currency operations. Note further that the integer data types can be used to obtain even better performance. Integers cannot always be used for currency math, because currency math often uses floating-point arithmetic.


Note

As the book Visual Basic 4 Performance Tuning and Optimization points out, alternate implementations are possible that enable the programmer to use only integer data types in place of floating-point arithmetic, making the execution time of the application considerably faster.
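That book explores such strategies in depth. Purely as an illustration of the general idea, and not of that book's specific implementation, a monetary amount can sometimes be held as whole cents in a long so that the arithmetic stays in integer form; the names and amounts below are illustrative.

Dim lngPriceCents As Long
Dim lngQuantity As Long
Dim lngTotalCents As Long

lngPriceCents = 1234                          ' $12.34 held as whole cents
lngQuantity = 3
lngTotalCents = lngPriceCents * lngQuantity   ' the math stays in integer form

' Convert to a displayable dollar amount only when needed
Debug.Print "Total: $"; Format$(lngTotalCents / 100, "0.00")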

Code Structure

Code structure can have a direct impact on speed. There are cases where the considerations for structuring code based on "good programming practice" might not be the same as for structuring for "maximum speed." The structure of code affects applications in ways you may anticipate, and in ways you may not anticipate. How you package your code does make a difference, as reflected in the observations that follow.

The first observation is one that is, perhaps, intuitively obvious to many programmers: in-line code is faster than procedure calls. In-line code refers to placing statements that must be carried out directly in the code path rather than packaging them in a subroutine or function call. The reason for the speed advantage is apparent: if statements are simply part of the current logic flow, no extra call overhead is incurred when the statements are carried out. If those statements are placed in a separate procedure and must be reached by a call, more low-level machine instructions must be executed simply to change the flow of code from the current sequence of statements to the procedure's statements, which are at a different address in memory. Additional overhead is then expended at the end of the procedure call to return to the original flow of statements.

However, the modular maintenance advantages and clarity provided from packaging code in procedures are regarded as good standard programming practices. The value of in-line code must then be considered with respect to the trade-off with procedure maintainability. If a block of in-line code is used 20 different places in the code, and a bug is found in this block, a change must be made in 20 places. This is opposed to the one location where a change is needed if a procedure had been used instead of in-line code.

What type of performance advantage does VB in-line code really provide to offset these maintenance penalties? A simple timed test was carried out to provide answers to this question. The basic concept behind the test program can be observed in Listing 11.8. The same code sequence shown in this listing was packaged four different ways: in-line code, a function call within the same module, a subroutine call within the same module, and a subroutine call in a different module. Then a separate timing was recorded for each of the four packaging approaches.

For i& = 1 To txtLoops

' The sample algorithm below serves as a simple

' example of an algorithm which takes two initial values,

' performs calculations on them, and modifies a global variable

' with the product of the results.

    dblInput1 = START_VALUE1

    dblInput2 = START_VALUE2

    dblInput1 = dblInput1 * 2

    dblInput2 = dblInput2 * 3

    g_dblAnswer = dblInput1 * dblInput2

Next i&

This sequence was tested in each of the four scenarios with precise timing methods, and the following results were recorded:

Scenario                                               Normalized Time

In-line code                                           1.00
Function call (function in same module as call)        2.00
Subroutine call (subroutine in same module as call)    1.65
Subroutine call (sub in different module than call)    1.66

The in-line code, not surprisingly, executes the fastest of any of our tests. The next quickest procedure-based approach took 65 percent longer than the in-line method. Clearly, there are significant performance benefits that can be realized from in-line code if the code being examined makes heavy use of calls. This leads us to the following observation:


Tip

In-line code is significantly faster than procedure calls.

The subroutine call test (same module) is faster than the function call test (same module) by a margin that is potentially non-trivial if many function calls are made within one program and the cumulative impact on performance is great. A function call incurs the overhead of assigning its return value back to the caller, while our subroutine call simply assigned the result to a parameter passed by reference. Of course, the exact comparison depends on the type of data returned, and whether the subroutine data is passed by reference or by value. In the tests carried out here, the remaining parameters were passed by value. The trend is significant enough to bring us to another observation.


Tip

Subroutines are generally somewhat faster than function calls because of efficiencies in returning data.

Notice that in our small test the subroutine call time is much the same regardless of whether the subroutine is defined in the same module as the call or a different module. Even though this module must be loaded into memory, whereas the form-defined subroutine was already in memory, the time impact difference was small. However, if you are on a memory constrained system or the module to load was very big, the impact could be more significant. A procedure in a module is loaded on-demand if that module has not already been loaded, so the code for the module must be pulled into memory when the call is made. The form, however, is already in memory along with its code, so it does not incur the same penalty. With small programs on systems without memory constraints, the impact is not likely to be significant. For large programs, there are economies of scale to grouping related procedures into one module so the whole collection can be loaded at one time.


Tip

There are advantages to grouping similar procedures into one module, but the performance savings may range from significant to barely noticeable depending on the program and environment.

So far our attention has focused on different types of procedure call alternatives. However, even within the same type of call, factors can affect performance. Of course the amount of data passed to a procedure will have an effect on performance: The more data passed via parameters, the greater the performance impact. However, it is not just the type and quantity of data passed that affects speed. Another often overlooked factor is the issue of how the data is passed into parameters.

Data can be passed by reference or by value. If the ByVal keyword is not provided in the declaration, the data is passed by reference for a given parameter. That means that variables passed in to the procedure and changed within the procedure will reflect the updated values back in the procedure that made the call. On the other hand, if the ByVal keyword is specified on a parameter declaration, that parameter is passed by value. In effect, when the procedure is called, VB looks at the master memory location and provides a copy of it to the called procedure. Then, when a parameter is updated in that called procedure, only the local copy is altered, and the original variable supplied as the parameter when the call was made is left untouched when the original flow of statements resumes after the call completes.

The explicit use of ByVal with variables that should not be altered when passed as parameters is a good programming practice and can prevent many unintended side-effect errors. Unfortunately, like many good programming practices, it comes at the cost of a slight performance penalty. Calls made with the ByVal keyword, whether function or subroutine calls, will be slower to a small extent.
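For reference, the sketch below shows the two declaration styles and their behavioral difference; the procedure and variable names are illustrative.

Sub IncrementByRef(lngValue As Long)        ' passed by reference (the default)
    lngValue = lngValue + 1                 ' the caller's variable changes
End Sub

Sub IncrementByVal(ByVal lngValue As Long)  ' passed by value
    lngValue = lngValue + 1                 ' only the local copy changes
End Sub

Sub Demo()
    Dim lngCounter As Long
    lngCounter = 10
    IncrementByRef lngCounter               ' lngCounter is now 11
    IncrementByVal lngCounter               ' lngCounter is still 11
End Sub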

A simple test program provided a look at the difference that can result. This program timed four types of procedure calls: a function call that passed parameters with ByVal; an identical function call without ByVal; a subroutine call that passed parameters with ByVal; and the identical subroutine call without ByVal.

Test Results

Test                              Normalized Timing

Function Call ByVal Test          1.11
Function Call No ByVal Test       1.05
Subroutine Call ByVal Test        1.04
Subroutine Call No ByVal Test     1.00

A time cost of roughly 4 percent results from passing the two parameters by value instead of by reference for functions and subroutines. The impact of this should be carefully considered, however, before deciding to apply optimizations to this area. ByVal is slower, but 4 percent is not a big performance penalty. The cumulative effect could perhaps be noticeable to a user if an application makes many repetitive procedure calls. Likewise, if procedure calls use many parameters, the savings to be gained would be expected to be greater still. However, if ByVals are eliminated to gain speed, this comes at the cost of good programming practice. Like most performance decisions, this one is not cut and dried, and will depend very much on the nature of your specific application.


Tip

Calls using parameters passed by reference are slightly faster than calls using parameters passed by value (declared with ByVal).

Execution Time In and Out of Development Environment

So far, the timings we have examined have come from programs running outside of the Visual Basic development environment. What if we were to carry out the same tests within the development environment? This is the mode in which many developers carry out performance testing, because it is more convenient to modify source code and run a quick trial right in the environment than to generate an .exe and execute it. However, this is not the correct way to assess VB performance. Users will not be running their programs in this environment. If performance analysis is performed in this environment, our results will be skewed and not representative of the performance issues facing the user who simply runs his .exe file. A test was carried out to determine how big a skew the VB development environment introduces to performance assessments.

To carry out this test, the same ByVal tests covered earlier were simply repeated inside the VB development environment. The execution times for these particular tests increased significantly, with slowdowns ranging from 26 percent to 31 percent. One reason for this slowdown is that when we have the VB environment running, there is another program loaded by Windows and active at the time the program runs. Some insight into the memory layout of Visual Basic, as provided at the start of this chapter, provides further understanding. Visual Basic maintains symbol tables in data segments in the development environment that are not present in the final executable. When your program runs in the development environment, VB has stored the text name representations of your variables and constants in this symbol table. This is, in effect, optimized out when you produce your .exe. The Visual Basic development environment, in conjunction with Windows, is doing more work to make your program run than would be carried out by the interpreter and Windows if it was already in P-code, or executable, format.

On the other hand, you may have correctly perceived in the past that your programs sometimes seem to load faster when run from the VB development environment. When you're in the development environment, the OCXs and VBXs required by your project are already loaded and do not have to be freshly loaded from disk when your program is started. All of these considerations add up to one important bottom line: don't gauge the performance your users will experience by measuring it from within the VB environment.


Tip

Aside from load time, performance will be generally faster outside the VB environment.

Control Loading

Many a Visual Basic programmer has turned to his or her users, after they watch the 15-second load time of the program with dismay, and said, "There's nothing I can do about that, that's just how fast the custom controls load!" Is the programmer really helpless to do anything about the speed of custom control loads? And for that matter, how much of a performance penalty is inherent in the loading of custom controls? Program load time is the time required to complete all load activity. To assess such activity, measurement should begin immediately prior to launch and conclude with the last statement of the form load event. Program initialization includes the phase in which Windows loads your executable and the VB run-time interpreter, and carries out activity such as loading and initializing VBX and OCX files into memory; the run-time interpreter is in memory before your first program statement is carried out. Taking a measurement within the program to encompass this Windows startup activity can pose something of a problem, because it is hard to have a program time itself when measurement must begin before the program even has control. Therefore, a special load utility or manual timing methods must be used to gather true load time that spans a program's entire load, from immediately prior to launch through the last statement in the load event. The key concept to be aware of when assessing form loads is that there is a lot more going on during a program load than just the code in your first Form_Load event.

When a form is loaded, every custom control used is loaded into memory from its OCX or VBX file. This means the more total custom controls you use, the more time is spent loading from files. The ramifications are clear.


Tip

If you can reduce the total number of custom controls your application uses, you will reduce form load time.

To test this, a test program with many custom controls was built. The program had no code that was carried out, but was simply a container for controls. Despite doing "no real code work," the load time required was significant. Simply eliminating controls drastically sped up the application, as expected. An immediate benefit is gained for each control eliminated, and if several are eliminated, the cumulative savings are very significant, with load time decreases of 90 percent easily reached in such trials.

Another activity that takes place during load time is that initializations for any standard controls you have defined are carried out by the interpreter. However, standard controls do not have to be loaded from a separate OCX or VBX file, because they come along for the ride with the Visual Basic run-time executable. In this sense, standard controls have an innate performance advantage over custom controls in separate files, which leads us to our next observation.


Tip

To enhance performance, use standard controls in lieu of custom controls, where possible. Standard controls load faster due to reduced disk access at load time.

In many cases it may not be possible or desirable to use a standard control alternative for a custom control. Custom controls provide many rich features and functional advantages, but if your motivation is sheer performance, you would be wise to use a standard control rather than its three dimensional, multimedia, fireworks color-generating alternative.

It is to be expected that some controls will load faster than others. After all, each control is based on different underlying code with different resource requirements. One of the challenges of evaluating the performance of your applications then becomes determining which controls load faster than others.

Often this must be considered on a case-by-case basis, but one generalization can be made about the standard VB controls. Lightweight controls load faster than heavyweight controls. A lightweight control is one that carries less baggage than a heavyweight control. Specifically, a lightweight control does not have a window handle and associated hWnd property, because it is not created as a window in the system, as many controls are. Therefore, there is less overhead in creating it and it requires fewer resources. Lightweight controls include the line, shape, label, and image control. The image control is perhaps the most significant in this group because these controls can be used in place of the picture control to contain bitmaps.

A test program was carried out to measure the difference. In the first test case, heavyweight picture controls loaded by the program are defined at design-time. In other words, they were placed on the form in design mode and are automatically loaded when the form loads. This program loaded in slightly over 300 milliseconds in repeated trials.

The next step was to compare this load time with that of an equivalent program based on the lightweight image controls. This program was essentially identical to the picture control load program, except that the controls used were image controls. This program consisted of a form that had the image controls placed on it at design-time just as the picture control program did. This image control program loaded in just over 200 milliseconds in a series of trials.

The lightweight control program was roughly 100 milliseconds (33 percent) faster than the heavyweight picture control program. Windows and the VB interpreter carry out less work to create the non-window image control than they do to create the picture control which has a window handle, among other resource requirements. Thus, a clear observation emerges.


Tip

Lightweight controls are significantly faster than heavyweight controls.

Because lightweight controls load faster than heavyweight controls, you might think the best you can do to optimize form loading is to simply use lightweight controls when you lay out your form. However, there is yet another optimization step that can be used if the situation warrants. That is to dynamically create multiple instances of an image control during the form load rather than to create them at design-time. One method to create a form with fifteen image controls, for example, is to lay out all fifteen image controls at design-time. Another approach could alternatively be taken to generate the same form. Only one image control is created at design-time, and that is indicated to be a control array by setting the image control's Index property to 0. Then, for this form with its one control, code can be written in the form load event to load additional instances of that control into a control array at load time, by using the Load statement.

This test was carried out, comparing the first method to the second. When controls are dynamically created at load time in this manner, the Visible property must be set to True for the control to be visible, and the correct Top and Left properties must be specified for this newly created control. Even with the additional code overhead to loop through the control array, carry out the Load statements, and perform the Left and Top initializations, this program performed faster than the counterpart that had all its image controls laid out at design-time. The code that declared additional instances of the image control by using the Load statement performed approximately 10 percent faster than the program that had all the image controls pre-declared. Although this is not an enormous time savings, the speed increase could be enough to be noticed by the user in some cases. The more controls on the form, the bigger the benefit to be reaped. However, this approach does make for more code which, in turn, poses more maintenance challenges and leaves more room for bugs. Nevertheless, this observation is one to keep in mind when considering ways to speed up your program if you use many instances of the same type of control.
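A minimal sketch of the technique follows. It assumes a form with a single image control named imgItem whose Index property was set to 0 at design-time; the number of instances and the positioning are illustrative.

Private Sub Form_Load()
    Dim i As Integer
    For i = 1 To 14
        Load imgItem(i)                 ' new instance based on imgItem(0)
        imgItem(i).Left = imgItem(0).Left
        imgItem(i).Top = imgItem(0).Top + (i * 500)
        imgItem(i).Visible = True       ' dynamically loaded instances start hidden
    Next i
End Sub

An instance that is no longer needed can later be removed with, for example, Unload imgItem(14).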


Tip

Image and picture controls created at run-time through a control array load faster than the same controls created at design-time.

There is yet another efficiency introduced by the use of dynamically loaded controls through a control array. Such controls can also be removed, when they are no longer needed, through the Unload statement. In certain cases, this allows for more efficient programs that can reduce their memory consumption even while a form is loaded, by unloading particular controls.

Comparing Control Loading Between 16-Bit and 32-Bit Visual Basic Applications

32-bit Visual Basic 4.0 supports 32-bit OLE controls (commonly referred to as OCXs). Thus, the control load activity it carries out is different from that of 16-bit Visual Basic 4.0, which supports both 16-bit OLE controls and VBXs (all 16-bit). Visual Basic 3.0, of course, supported only VBX controls. One other difference between applications produced in these different versions is that the application executable size will differ as well. A series of programs was converted from a Visual Basic 3.0 VBX implementation, to a Visual Basic 4.0 16-bit VBX implementation, to a Visual Basic 4.0 16-bit OLE control implementation, to a 32-bit OLE control implementation by simply changing the controls, with no code changes. The size of executables generally changed as follows:

Environment                              Normalized Sizes

Visual Basic 3.0 VBX                     1.00
Visual Basic 4.0 VBX                     1.15
Visual Basic 4.0 16-Bit OLE Control      1.15
Visual Basic 4.0 32-Bit OLE Control      1.20

Most of the increased size of VB4 executables is simply from the switch between versions of Visual Basic. There seems to be relatively little difference between VBX and 16-bit OLE controls in terms of impact on executable size. There is a more noticeable, but still slight, difference between 16-bit and 32-bit OLE controls and their impact on executable size. It is also significant to note that the size of the run-time executive itself varies between Visual Basic 3.0, 16-bit Visual Basic 4.0, and 32-bit Visual Basic 4.0. Program size is, of course, one factor that affects load speed, but not necessarily the most significant factor.

Run-Time Executive                 Normalized Sizes

Visual Basic 3.0 VBRUN300.DLL      1.00
Visual Basic 4.0 VB40016.DLL       1.77
Visual Basic 4.0 VB40032.DLL       2.31

Note: Numbers in the preceding tables were derived from a preliminary version of Visual Basic 4.0.

As for load times themselves, general findings held consistent across all versions. In other words, it was always true that dynamically created picture controls out-performed design-time picture controls with respect to loading, that image controls loaded faster than picture controls, and that custom controls, whether VBX or OCX, introduced noticeable load delay with each additional control included.

In the test applications that were examined, 16-bit VBX programs loaded in roughly the same amount of time as similar programs based on 16-bit OCXs. In some cases, 16-bit OCX programs did load significantly faster than the same programs using 32-bit OCXs, particularly in cases where a very large number and variety of controls was used. Standard controls showed a slight trend to load faster in 16-bit versions than in 32-bit applications, but the difference was not statistically significant. Visual Basic 3.0 programs generally load more quickly than do 4.0 programs due to less OLE overhead. The results will be different for every application, so the information summarized here should be viewed as one set of test data and not a sweeping statement about all control load times. However, it does point out that load time considerations are very much present in Visual Basic 4.0, and not eliminated by any means in the 32-bit world. The fewer controls you have, the faster your programs will load. The more controls you have, the more overhead takes place at load time, and the slower the performance for your end user.

Control Properties

Working with control characteristics involves using control methods and setting control properties. Most VB developers are aware of that. However, many new VB developers are not aware that controls can have default properties. Even those developers who are aware that default properties exist often think of them as little more than a code convenience. However, the following means of making an assignment based on a control property setting are not equivalent:

(a)

strVariableA = Label

(b)

strVariableA = Label.Caption

These statements achieve the same ultimate action, but in different ways. The code path traveled for each of the two assignments is different. Therefore, performance is different for the two methods. A simple test program was used to measure this difference. A timing was made of a sequence of code that used default properties to set a checkbox value, label caption, and textbox text. Then a timing was made of a similar sequence of code that used explicit property references (in lieu of the defaults) to set a checkbox value, label caption, and textbox text. The results clearly illustrated that assignments made through default properties are faster under Visual Basic 3.0. The explicit property assignments take roughly 25 percent longer than the default assignments. However, under Visual Basic 4.0 the performance of these two techniques is virtually identical. This leads to an important observation:


Tip

Use control default properties rather than explicitly naming those properties to improve performance under Visual Basic 3.0. This optimization step does not have as much significance under Visual Basic 4.0, however.

Control properties provide a convenient way of storing data. As we have seen, these values can be used in assignments much like variables. The natural question that follows is whether variables can offer better performance, because they are not encumbered by control overhead. A test was carried out to address this question with a repeated loop of data assignments and additions based on variables. Property values were cached to variables at the start of the test. A similar test was carried out in which control properties were used directly in the same statements in place of variables, and therefore no initial assignment was required.

Therefore, the variable-based code sequence actually consisted of more lines of code to accommodate the variable initialization from the control properties. In spite of this, the property update test took over 20 times as long as the variable-based test. This test shows the value of storing control property values in variables whenever they will have to be repeatedly referenced and leads to the following observation:


Tip

Store control properties in variables if you will reference those properties frequently in code.

This has the familiar maintenance drawback of requiring additional development time and introducing more code to debug and maintain. On the other hand, the performance payback from any type of program that has to frequently reuse the same property value is likely to be of enough significance to show a noticeable performance improvement to the user.
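A minimal sketch of the idea follows; the control name, loop count, and the work performed are illustrative.

Dim i As Long
Dim strInput As String

' Slow: the Text property is read on every pass through the loop
For i = 1 To 10000
    If Len(txtInput.Text) > 0 Then
        ' ... work with txtInput.Text ...
    End If
Next i

' Faster: cache the property in a variable once and reuse the variable
strInput = txtInput.Text
For i = 1 To 10000
    If Len(strInput) > 0 Then
        ' ... work with strInput ...
    End If
Next i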

Math

This section discusses how to use variables efficiently when using them in mathematical operations. A simple program was written that carries out mathematical operations using each data type along with every arithmetic operator. In the case of the integer data type, for example, a simplified code listing of the calculations can be seen in Listing 11.9.

' Declare the variables

Dim X As Integer, Y As Integer, Z As Integer

' Assign the variables

X = 2 : Y = 3

' Perform the calculations

Z = X + Y

Z = X - Y

Z = X * Y

Z = X / Y

Z = X \ Y

Z = X ^ Y

Z = X Mod Y

Each arithmetic operation shown here was timed for each data type. The timing results for the integer, long, single, and double data types, along with the arithmetic operations applied to them, are summarized in Table 11.4.

Data Type     +       -       *       /       \       ^       Mod

Integer       1.01    1.00    1.04    2.51    1.04    3.18    1.07
Long          1.06    1.06    1.37    2.61    1.53    3.19    1.50
Single        1.32    1.30    1.30    1.45    3.32    2.16    3.31
Double        1.32    1.33    1.32    1.47    3.33    2.18    3.24

These results will be used to support the conclusions that will follow regarding arithmetic operators and the data types used with them.

Integer Math is Faster than Floating-Point Math

In most cases, integer math is faster than floating-point math. This should make sense intuitively, because the computer does not have to worry about decimal points when using integers, and the range of an integer is much smaller than that of a floating-point value. One would expect, therefore, that a mathematical operation carried out on an integer would require less time than one carried out on a floating-point value.

Refer back to Table 11.4 for the comparison between integer and floating-point data types. Notice that, in every case except floating-point division and the exponent operators, the integer data types out-perform their floating point counterparts. Notice that integer math is faster than floating-point math by approximately 28 percent for addition, subtraction, and multiplication. The modulus operator using the integer and long data types is faster by an average of over 150 percent. These results prove that, when performing most mathematical operations on data, if you can represent that data with the integer and long data types, the performance of your code improves. Performance improves particularly when you apply the same mathematical operation over and over, such as in a loop. While you may not obtain a noticeable speed increase by replacing one floating-point subtraction with an integer subtraction, you certainly may if that operation is contained in a loop that executes one hundred times.


Tip

Integer math is faster than floating-point math.

As you can see from the table, there are two exceptions to the guideline that integer math is faster than floating-point math. These two exceptions occur when using the floating-point division and exponent operators. Each of these exceptions is significant, and each is discussed below.

Do Not Use Floating-Point Division with Integers

Floating-point division (represented by the "/" symbol) is different from integer division (represented by the "\" symbol). Floating-point division returns a floating-point result regardless of the data type of the two numbers being divided. Integer division, on the other hand, rounds floating-point variables being divided into integers (unless they are already integers, in which case this is not necessary) and returns an integer result. If the numbers being divided do not divide evenly, the fractional portion of the result is truncated to ensure the result is an integer. The integer result may be of data type integer or long, depending on the range of the result.

If integers are used in a floating-point division, the values are treated as floating-point values during the division. The result is a floating-point value (single or double depending on which data type is better suited for the solution). If, however, the result is stored in an integer variable, the result must be converted back into integer form. The work of converting the two integer variables to be divided into floating-point form for division and back into integer form for the result makes the cumulative time of the entire operation much slower than if floating-point variables were to be used.

Integer division, on the other hand, takes whatever variables are being divided and converts them into integer format. The variables are essentially rounded to integer numbers and considered integers or longs, depending on the range of the data. The division is an integer division, and the result is an integer or long, again depending on the range of the result and the most appropriate data type to use for it. If floating-point values are used with the integer division operator, the amount of time needed to round and convert the variables into integer form and then convert the result into a floating-point representation takes much more time than if floating-point division were used with floating-point variables.

Thus, whenever you divide two integer values and desire an integer result, use the integer division operator ("\") and do not use the floating-point division operator ("/"). If, on the other hand, you have two floating-point values and wish a floating-point result, do not use the integer division operator. In Table 11.5, the comparison between integer division using the integer data type versus floating-point division using the floating point data type can be seen.

Data Type    Floating-Point Division    Integer Division

Integer      2.40                       1.00
Long         2.50                       1.47
Single       1.39                       3.18
Double       1.41                       3.19

Notice that integer division using the integer data type is faster than floating-point division using either the single or double data type (1.00 versus 1.39 and 1.41 normalized performance, respectively). Integer division using the long data type takes slightly longer than floating-point division using the single or double, but only very slightly (1.47 versus 1.39 and 1.41 normalized performance). Finally, integer division using floating-point data types takes much longer than integer division using the integer data types. The most common oversight among programmers is to use the floating-point division operator with integer variables. Notice that doing so incurs a needless penalty of over 140 percent and 70 percent for integers and longs, respectively.


Tip

Do not use floating-point division with integers.
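To make the guideline concrete, the sketch below matches each division operator to the data types involved; the values are illustrative.

Dim intA As Integer, intB As Integer, intResult As Integer
Dim dblA As Double, dblB As Double, dblResult As Double

' Integer operands and an integer result wanted: use the "\" operator
intA = 10: intB = 3
intResult = intA \ intB        ' yields 3 with no floating-point conversion

' Floating-point operands and a fractional result wanted: use the "/" operator
dblA = 10#: dblB = 3#
dblResult = dblA / dblB        ' yields 3.333...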

Do Not Use Integers with the Exponent Operator

The other exception to the guideline that states that integer math is faster than floating-point math is the use of the exponent (^) operator. The exponent operator calculates a result based on the following formula:

result = number ^ exponent

where number can be any data type, exponent can be any data type (except that exponent must be an integer if number is negative), and result is of type double or variant. It is very important to notice that the result defaults to the double data type. Even if number and exponent are both integers, the result is still converted into the floating-point, double data type.

The fact that the exponent operator always produces a floating-point result indicates that the operation is inherently a floating-point operation. Even if integers are used for the number and exponent, the processor still carries out floating-point math. This means that if the number and exponent are integers, they must be converted into floating-point form before the calculation can be carried out. If the number and exponent are already in floating-point form, the operation is faster because no conversions are necessary. Furthermore, if the result variable is an integer data type, an additional conversion must be performed to round the floating-point number into an integer value. A simple application was written that applies an exponent to a number for each data type. The timing results are shown in Table 11.6.

Number     Exponent    Result    Relative Performance

Integer    Integer     Double    1.47
Long       Long        Double    1.48
Single     Single      Double    1.00
Double     Single      Double    1.01

Note that the integer and long data types take approximately the same amount of time to execute, as do the single and double data types. From these data, we can conclude that when performing operations with the exponent operator, it is better to use floating-point variables for the number, exponent, and result in order to increase performance. It turns out that the exponent operator is one of the more time-consuming arithmetic operators in the set.


Tip

Avoid integers with the exponent operator.

Avoid Variant Math

As we have already seen, using variants typically slows down an application due to the overhead required as a result of their flexibility. One can expect that variants will typically take more time in execution than the other data types.

Consider a simple arithmetic operation between two variants, X and Y, which are assigned integer values. Table 11.7 shows what data types are returned from each operation between two variants that are represented as integers.

Operation         X      Operator    Y      Z

Addition          Int    +           Int    Int
Subtraction       Int    -           Int    Int
Multiplication    Int    *           Int    Int
FP Division       Int    /           Int    Double
Int Division      Int    \           Int    Long
Exponent          Int    ^           Int    Double
Modulus           Int    Mod         Int    Long

As you can see, the variants being operated on are treated as integers (that is, the integer or long data type), as is the result variant Z, with the exception of the floating-point division and exponent operators. One would expect, therefore, the timings for each of the operations to be equivalent to those of the integer data type. If you observe Table 11.8, however, you notice that the execution times are significantly greater than those of the integer data type where we would expect them to be close.

Data Type    +       -       *       /       \       ^       Mod

Integer      1.00    1.00    1.00    1.00    1.00    1.00    1.00
Variant      1.37    1.38    1.37    1.06    1.93    1.09    1.85

As you can see from the table, in every case where we would expect the performance of the variant to approximate that of the integer, we encounter a substantial percentage increase in execution time. For addition, subtraction, and multiplication we see almost 40 percent more time required. But for integer division and the mod operators, the percentages jump way up to 93% and 85%, respectively. Floating-point division and the exponent operators for the integer data type do not result in much difference, but this can be accounted for due to the fact that, in both cases, the variables must be converted to floating-point and then back to integer. The time required for these conversions applies equally to the variant and both integer data types and may explain why those execution times are more equal. Thus, when performing mathematical operations, it is always advisable to use the integer and floating-point data types and avoid the variant.


Tip

Avoid variant math.

Multiplication Is Faster than Division

Now that data types have been discussed, attention turns to the arithmetic operators and mathematical functions themselves. This section compares multiplication to division regardless of data type. Table 11.9 summarizes the results of the multiplication and division operations for each data type.

Data Type    Multiplication    FP Division    Integer Division

Integer      1.00              2.40           1.00
Long         1.31              2.50           1.47
Single       1.25              1.39           3.18
Double       1.25              1.41           3.19
Currency     1.93              2.27           3.69
Variant      1.37              2.55           1.93

Note that in every case but one, multiplication is faster than division. The only exception is multiplication and integer division of variables with the integer data type, which yields the same execution time in this test.

Because multiplication is faster than division, it is obviously preferable to multiply rather than divide whenever possible. Notice also that integer division using variables of the integer data type can actually be faster than floating-point division using floating-point variables. If integer division is carried out with the long data type, the execution time is slightly longer than single or double floating-point division. Thus, if the two values being divided are of the integer data type, and the result is to be an integer, then integer division is faster than floating-point division. In most cases, however, when dividing two quantities, floating-point math is needed because a decimal number may result. Thus, use integer division only when you don't need, or won't get, a fractional result.


Tip

Multiplication is faster than division.


Note

The book Visual Basic 4.0 Performance Tuning and Optimization presents strategies for using multiplication instead of division in applications, along with the resultant speed increases that can be obtained. Consult that book for more information.
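Purely as a simple illustration of the general idea, and not of that book's specific techniques, division by a constant can often be rewritten as multiplication by its reciprocal:

Dim dblValue As Double
Dim dblResult As Double

dblValue = 128#

' Division form
dblResult = dblValue / 4#

' Equivalent multiplication form, which the timings above favor
dblResult = dblValue * 0.25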

Graphics Controls and Methods

Windows is an environment that relies heavily on graphics. Unlike the character-based environment of MS-DOS, Windows uses a friendly graphical user interface, or GUI, that makes simple programming tasks easier for the user. The graphical interface also provides a common platform upon which all Windows applications are based.

Because graphics are such an inherent part of Windows, it is natural to assume that graphics issues can have a large impact on performance. When writing programs using Visual Basic, the programmer is presented with an easy-to-use programming interface. Using this environment, it is very easy to make extensive use of graphics, bitmaps, animation, and other techniques to enhance the appearance and functionality of an application. With the use of graphics, however, comes the responsibility to make sure they are used in the most optimal way for the sake of performance. This section points out some of the significant points to remember when working with graphics controls and methods.


Tip

Graphics created at design-time using controls are faster than using graphics methods at run-time.

When graphics controls are explicitly defined at design-time, they are faster than graphics methods (VB statements that generate graphics, such as Line). On the other hand, when graphics controls are created dynamically at run-time, they are only very slightly faster than graphics methods. The total time required to use controls compared with methods depends on the quantity and type of graphics being displayed. As a rule, however, the results should favor the use of graphics controls, particularly when they are all defined and created in design mode. Keep in mind that, although the ultimate load and display time of the application that uses controls is shorter than that of the one using methods, the application that uses controls has a larger executable file size and requires more memory and resources. This is because the controls must be stored as a part of the form rather than created "on-the-fly" at run-time. Implementing the methods through the interpreter takes more cumulative time than displaying graphics that are "pre-defined" with a graphics control.

This does not automatically mean that you should rule out the use of graphics methods in every instance. The pros and cons of each approach must also be considered in light of the way in which they are used. One primary benefit of graphics methods is that some of the graphics methods can produce graphics that are impossible to create using the standard graphics controls. For example, the PSet method can plot a single point, which is impossible when using a graphics control unless you draw a very small line.

Another benefit of graphics methods is that they do not consume the memory and resources that controls do. Furthermore, a great deal of graphics can often be generated quickly using code where using controls may be otherwise cumbersome or impractical. For example, consider the case when you need to draw 100 grid lines for a graph. Rather than constructing 100 controls, it may be easier to write a simple code loop that plotted the 100 lines on the form using graphics methods. The 100 graphics controls would also take up space in memory, because they are controls that must be addressable until they are destroyed.
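A minimal sketch of such a loop follows, drawing vertical grid lines with the form's Line method; the number of lines and the spacing are illustrative.

Private Sub Form_Paint()
    Dim i As Integer
    Dim sngX As Single
    For i = 1 To 100
        sngX = ScaleLeft + (ScaleWidth / 101) * i
        Line (sngX, ScaleTop)-(sngX, ScaleTop + ScaleHeight)   ' one grid line
    Next i
End Sub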

Graphics methods, however, are simply painted on the form and cannot be "addressed" as graphical entities like controls can. Thus, they take up far less memory and fewer resources than the graphics controls do. If memory is limited, the benefits of using methods may outweigh the speed benefits of using controls. After all, when memory becomes scarce, Windows itself can bog down and make every operation in Windows run slower. Thus, speed may become an issue after all.

As expected, graphics methods also have disadvantages as compared to the other techniques. While they often paint and redisplay faster on forms, they take more processing time to create and place on the form. Furthermore, they often require more programming effort to implement. When working with graphics methods, the programmer must write code to perform simple drawing and positioning of graphics on the form. The programmer not only has the responsibility of calculating the size and position of each graphic drawn, he also must define the shape and its dimensions. Controls, on the other hand, have a predetermined shape that can be changed in code but do not have to be changed or specified in code. When using methods, the programmer also cannot see the graphics on the form at design-time, which is a big disadvantage when trying to put together graphics quickly. Thus, graphics controls are often easier to use and faster to display than graphics created using graphics methods. But this comes at the cost of additional memory and resource consumption, which can often be a significant issue when memory and resources are scarce.


Tip

Printing graphics to a printer takes longer than displaying those graphics on screen.

The programmer should expect that printing graphics to the printer will, in almost all cases, take considerably longer than displaying those graphics on the screen. Video display resolutions have, over the years, been more standardized than the various printers and printer drivers out on the market. Because the printer is a mechanical device, while the video display is essentially an electronic device, the developer should always favor providing graphics output to the screen when possible, and not just to the printer.

It is wise, for example, to provide the user with a print preview screen in a word processing application to show the user what the document will look like out on the printer before it is actually sent there. Because it takes so much longer to print the document than to view it, the user can make changes and see on the monitor what the document will look like on the printed page. In such a manner, the user can refine the document until he gets it just the way he wants it, and then he can send it out to the printer. The user may become very frustrated if he has to wait three minutes every time he wants to print his document and make a change, not to mention all the paper he will waste in the process.

Therefore, when considering printing speed versus display speed, keep in mind that display speed is almost always faster by far, and that your application should always allow the user to see what he is about to print before it gets printed. In this manner, he can only print when he needs to and is not dependent on a slow printer for making changes to something he could do much faster by seeing it on the screen. Furthermore, by keeping the graphics resolution lower and using monochrome if a printer does not support color, the performance of the printing process can be improved dramatically.

Displaying Pictures

In addition to displaying lines and shapes, programmers often wish to incorporate pictures into their applications for animation and other visual effects. Visual Basic has several mechanisms that allow the programmer to display and manipulate pictures. Visual Basic forms have a Picture property that can be set to a graphics file. Furthermore, two controls can be used to display pictures inside forms. These two controls are the PictureBox, or picture control, and the Image control. These two controls differ somewhat in functionality and performance.

Picture Controls

Picture controls are very powerful and useful for displaying pictures. The picture control is located on the Visual Basic tool palette, and the programmer can select it and place it on the form. The programmer essentially defines the size and location of the picture control on the form. The picture control has a property called Picture that can be set at design-time to a graphics file of format .BMP, .DIB, .ICO, or .WMF. The graphics file can also be loaded into the control at run-time with the LoadPicture function.

When the programmer places a picture control on a form, he has control over the control's dimensions, in particular, the width and height. When a picture is loaded into the control, the picture occupies only as much space as its original size. In other words, if you have a bitmap that is two inches wide by two inches high, and you create a picture control four inches by four inches, the bitmap will fit into the picture control and there will be twelve square inches of space remaining in the picture control. In order to re-size the picture control so that there is no extra space, the programmer can set the AutoSize property to True, which re-sizes the picture control to fit snugly around the bitmap image.

Controls such as labels and command buttons can be placed inside picture controls and, in essence, become "child controls" of the picture control, which is the parent. This makes the picture control not only a powerful mechanism for displaying pictures, but also for containing and manipulating groups of controls in addition to pictures.

Image Controls

An image control is another one of the three ways to display a picture in a Visual Basic application. In the Microsoft Programmer's Guide, an image control is defined as "a rectangular area into which you can load picture files." The picture file formats supported are .BMP, .DIB, .ICO, and .WMF. Image controls, unlike picture controls, allow the programmer to stretch, or resize, pictures so that they utilize the entire rectangular area of the image control. When using a picture control, the size of the picture being loaded into the control cannot change. With image controls, however, the Stretch property can be set to True, which automatically resizes the picture to take up the entire region of space defined by the image control.

As in the picture control example, suppose the bitmap is smaller than the control that displays it. The image control does not have an AutoSize property like the picture control does. Thus, we cannot automatically size the control to fit the bitmap. In the case of the image control, we have to go the other way around. By setting the Stretch property to True, we can increase or decrease the size of the picture to fill up all the space in the image control.

Image controls are also different from picture controls in that the programmer cannot place "child" controls inside them. Nothing can be placed inside an image control other than an image, which must be set using the Picture property or by being loaded into the control at run-time.
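A minimal sketch contrasting the two controls follows; the control names and the file path are hypothetical.

' Picture control: AutoSize shrinks or grows the control to fit the bitmap
picLogo.AutoSize = True
picLogo.Picture = LoadPicture("C:\IMAGES\LOGO.BMP")   ' hypothetical path

' Image control: Stretch resizes the picture to fill the control
imgLogo.Stretch = True
imgLogo.Picture = LoadPicture("C:\IMAGES\LOGO.BMP")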


Tip

Image controls are faster than picture controls.

A test application was constructed that loads a series of picture controls containing a bitmap, and another application was constructed that loads a series of image controls containing the same bitmap. Timings were taken and, in all cases, the application loaded faster using the image controls than it did using the picture controls.

One of the reasons that image controls are faster is that the underlying structure of the image control is simpler than that of the picture control. A picture control is essentially a window, similar in kind to a Visual Basic form. First of all, it can contain child controls like a form can. This means that the picture control must keep track of all its child controls and pass Windows messages to them. This requires additional complexity and processing time which are not required of the image control. Furthermore, the picture control has more properties, events, and methods than the image control. Table 11.10 shows the events for an image control.

Table 11.10. Image control events.

Click
DblClick
DragDrop
DragOver
MouseDown
MouseUp
MouseMove

Compare this to the events for a picture control, shown in Table 11.11.

Table 11.11. Picture control events.

Click
DblClick
DragDrop
DragOver
MouseDown
MouseUp
MouseMove
Change
GotFocus
KeyDown
KeyUp
KeyPress
LinkClose
LinkError
LinkNotify
LinkOpen
LostFocus
Paint
Resize

Looking at the much larger list of events for the picture control, you will notice several capabilities the picture control has that the image control does not. Picture controls can recognize keyboard input, participate in DDE links, detect when they gain or lose focus, and are notified by Visual Basic when they are painted and resized. Because of this increased functionality, Visual Basic must do more work and message handling for picture controls than it must for image controls.

This additional processing takes extra time, which is the primary reason the picture control performs worse than the image control. Unlike the image control, the picture control is an actual window, so it also consumes more resources. In some cases, it may be worthwhile to take advantage of the picture control's additional functionality; in those cases, some performance is traded for that functionality. In many other cases, however, the added functionality is never needed or used, and it is certainly preferable to use an image control rather than a picture control. If the only thing the user does with a picture control is click on it or drag and drop it, you are needlessly wasting resources and degrading performance; an image control would be sufficient and faster. If, as in the examples presented in this chapter, you are simply displaying a picture and do not need to worry about keyboard input, DDE links, or the resizing and painting of the control, then the image control is clearly the better choice.

The image control may also be preferred, in certain cases, for its functionality. Although it is not as complex as the picture control, developers often take advantage of its Stretch capability, which can make it the ideal candidate quite apart from performance considerations. Keep in mind, however, that the performance gain may become more dramatic as the number of controls, the resolution of the pictures in those controls, and the interaction between the controls and the forms that contain them increase.

Summary

This chapter has presented some of the key points and strategies you can apply to Visual Basic programs to make them as efficient as possible. Having read this chapter, you can rightly conclude that the subject of performance is rather broad. It can, at times, be a bit daunting, but this chapter should make it clearer and easier to tackle. The guidelines presented here can be applied in a general sense because they have been verified through programs the authors have created and tested. Visual Basic 4.0 provides a great deal more functionality than its predecessor and, in many cases, better performance as well. Even though the language has been optimized, however, the programmer still shares the responsibility of writing applications that are robust, efficient, and pleasing to the user. The authors hope this chapter helps make that more of a reality for you.

Note

For a much more detailed and thorough treatment of Visual Basic performance tuning and optimization, remember to consider Sams' comprehensive book on performance tuning, Visual Basic 4 Performance Tuning and Optimization, by Keith Brophy and Tim Koets. This book covers a much wider range of performance issues than can be addressed here and is recommended for those who wish to have an in-depth understanding of performance issues.
